Regularization parameter selection for penalized-maximum likelihood methods in PET

Authors

  • Johnathan M. Bardsley
  • John Goldes
Abstract

Penalized maximum likelihood methods are commonly used in positron emission tomography (PET). Because a Poisson data-noise model is typically assumed, standard regularization parameter choice methods, such as the discrepancy principle or generalized cross-validation, cannot be directly applied. In recent work of the authors, regularization parameter choice methods for penalized negative-log Poisson likelihood problems were introduced, with image deconvolution as the application. In this paper, we extend those methods to PET, introducing a minor modification that appears to improve their performance. Moreover, we show how these methods can be used to choose the hyper-parameters in a Bayesian hierarchical regularization approach, also from the authors’ previous work.
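The abstract's setting can be illustrated with a small numerical sketch: a penalized negative-log Poisson likelihood reconstruction in which the regularization parameter is selected by a Poisson analogue of the discrepancy principle (choosing the parameter so that the Poisson deviance of the fit is close to the number of data points). Everything below, including the 1-D Gaussian blur operator, the constant background, and the Tikhonov penalty, is an illustrative assumption, not the paper's exact method or setup.

```python
# Hedged sketch: regularization parameter choice for penalized
# negative-log Poisson likelihood estimation, via a Poisson-deviance
# analogue of the discrepancy principle. Illustrative only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 64
x_true = np.zeros(n)
x_true[20:40] = 50.0                                  # toy emission source
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)                     # 1-D Gaussian blur operator
b = 1.0                                               # constant background term
z = rng.poisson(A @ x_true + b).astype(float)         # Poisson-distributed data

def neglog_poisson(x, alpha):
    # Penalized negative-log Poisson likelihood with Tikhonov penalty
    Ax = A @ x + b
    return np.sum(Ax - z * np.log(Ax)) + 0.5 * alpha * np.dot(x, x)

def reconstruct(alpha):
    # Nonnegatively constrained minimizer for a given alpha
    res = minimize(neglog_poisson, np.ones(n), args=(alpha,),
                   method="L-BFGS-B", bounds=[(0.0, None)] * n)
    return res.x

def deviance(x):
    # Poisson deviance; roughly chi-squared with n degrees of freedom
    # when the fit is statistically consistent with the data
    Ax = A @ x + b
    term = np.where(z > 0, z * np.log(np.where(z > 0, z, 1.0) / Ax), 0.0)
    return 2.0 * np.sum(term - z + Ax)

# Discrepancy-principle-style choice: pick the alpha whose deviance
# is closest to the number of data points n
alphas = np.logspace(-3, 1, 9)
alpha_star = min(alphas, key=lambda a: abs(deviance(reconstruct(a)) - n))
```

In practice one would solve for the root of `deviance(alpha) - n` rather than scan a grid, but the grid keeps the sketch short and self-contained.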


Related articles

Techniques for regularization parameter and hyper-parameter selection in PET and SPECT Imaging

Penalized maximum likelihood methods are commonly used in positron emission tomography (PET) and single photon emission computed tomography (SPECT). Because a Poisson data-noise model is typically assumed, standard regularization parameter choice methods, such as the discrepancy principle or generalized cross-validation, cannot be directly applied. In recent work of the authors, r...


Automatic Smoothing and Variable Selection via Regularization

This thesis focuses on developing computational methods and the general theory of automatic smoothing and variable selection via regularization. Regularization methods are a commonly used technique for obtaining stable solutions to ill-posed problems such as nonparametric regression and classification. In recent years, methods of regularization have also been successfully introduced to address a cla...


Regularization Parameter Selection in the Group Lasso

This article discusses the problem of choosing a regularization parameter in the group Lasso proposed by Yuan and Lin (2006), an l1-regularization approach for producing a block-wise sparse model that has attracted a lot of interest in statistics, machine learning, and data mining. It is important to choose an appropriate regularization parameter from a set of candidate values, because it...


Efficiency for Regularization Parameter Selection in Penalized Likelihood Estimation of Misspecified Models

It has been shown that AIC-type criteria are asymptotically efficient selectors of the tuning parameter in non-concave penalized regression methods under the assumption that the population variance is known or that a consistent estimator is available. We relax this assumption to prove that AIC itself is asymptotically efficient and we study its performance in finite samples. In classical regres...


An information approach to regularization parameter selection under model misspecification

We review the information approach to regularization parameter selection and its information complexity extension for the solution of discrete ill-posed problems. An information criterion for regularization parameter selection was first proposed by Shibata in the context of ridge regression as an extension of Takeuchi’s information criterion. In the information approach, the regularization para...




Publication date: 2010